Search Results: "Aigars Mahinovs"

18 August 2015

Aigars Mahinovs: Debconf 15 group photo

The long-awaited group photo from Debconf15 is now available: here and here. Due to its spectacular glory, Google Photos could not handle the massive 52 MB and 19283*8740 = 168.5 Mpix of awesomeness, so there is only a half-size version there. Also, I plan to give a lightning talk on Thursday on how exactly such things are made :)

25 September 2014

Aigars Mahinovs: Distributing third party applications via Docker?

Recently the discussion around how to distribute third-party applications for "Linux" has become the topic of the hour, and for a good reason - Linux is becoming mainstream outside of the free software world. While having each distribution ship a perfectly packaged, version-controlled and natively compiled version of each application, installable from a per-distribution repository in a simple and fully secured manner, is a great solution for popular free software applications, this model is slightly less ideal for less popular apps and for non-free software applications. In these scenarios the developers of the software want to do the packaging into some form, distribute that to end users (either directly or through some other channels, such as app stores) and have just one version that works on any Linux distribution and keeps working for a long while. For me the topic really hit home at Debconf 14, where Linus voiced his frustrations with app distribution problems, and some of that was also touched on by Valve. Looking back we can see passionate discussions and interesting ideas on the subject from systemd developers (another) and Gnome developers (part2 and part3).

After reading/watching all that I came away with the impression that I love many of the ideas expressed, but I am not as thrilled about the proposed solutions. The systemd-managed zoo of btrfs volumes is something that I actually had a nightmare about. There are far simpler solutions with existing code that you can start working on right now. I would prefer basing Linux applications on Docker. Docker is a convenience layer on top of Linux cgroups and namespaces. Docker stores its images in a datastore that can be based on AUFS or btrfs or devicemapper or even plain files. It already has a semantic for defining images, creating them, running them, explicitly linking resources and controlling processes.

Let's play out a simple scenario of how third-party applications could work on Linux. A third-party application developer writes a new game for Linux. As his target he chooses one of the "application runtime" Docker images on Docker Hub. Let's say he chooses the latest Debian stable release. In that case he writes a simple Dockerfile that installs his build dependencies and compiles his game in a "debian-app-dev:wheezy" container. The output of that is a new folder containing all the compiled game resources and another Dockerfile - this one describes the runtime dependencies of the game. Now when a Docker image is built from this compiled folder, it is based on a "debian-app:wheezy" container that no longer has any development tools and is optimized for speed and size. After this build is complete the developer exports the Docker image into a file. This file can contain either the full system needed to run the new game or (after #8214 is implemented) just the filesystem layers with the actual game files and enough metadata to reconstruct the full environment from public Docker repos. The developer can then distribute this file to the end user in whatever way is comfortable for them.

The end user would download the game file (through an app store app, an app store website or in any other way) and import it into the local Docker instance. For user convenience we would need to come up with a file extension and create some GUIs to launch on double click, similar to GDebi. Here the user would be able to review what permissions the app needs to run (like GL access, PulseAudio, webcam, storage for save files, ...).
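
Very roughly, the developer-side build and the user-side launch described above could look something like the sketch below with today's Docker commands. The base image names ("debian-app-dev:wheezy", "debian-app:wheezy") are the hypothetical runtime images this post proposes, the Dockerfile contents are only described in comments, and the exact mounts a launcher would pass depend on the permissions the user approved - so treat this as a sketch, not a spec.

# Developer side: build in the fat dev image, produce a runtime image, export it.
# build/Dockerfile would start FROM debian-app-dev:wheezy, set its WORKDIR to the
# source tree and compile the game; game/Dockerfile would start FROM
# debian-app:wheezy and just copy the compiled result in.
docker build -t mygame-build ./build
docker run --rm -v "$PWD/game:/out" mygame-build make install DESTDIR=/out
docker build -t mygame:1.0 ./game
docker save -o mygame-1.0.docker mygame:1.0

# User side: a GDebi-like helper imports the file and translates the approved
# permissions (GL, PulseAudio, X, save-game folder) into docker run options.
docker load -i mygame-1.0.docker
docker run --rm \
--device /dev/dri \
-v "/run/user/$(id -u)/pulse:/run/pulse" \
-v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY="$DISPLAY" \
-v "$HOME/.local/share/mygame:/data" \
mygame:1.0
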
Enough metainfo and cooperation would have to exist to allow the desktop menu to detect installed "apps" in Docker and show shortcuts to launch them. When the user does so, a new Docker container is launched running the command provided by the developer inside the container. Other metadata would determine other docker run options, such as whether to link over a socket for talking to PulseAudio, or whether to mount a folder into the container where the game would be able to keep its save files. Or even whether the application would be able to access X (or Wayland) at all. Behind the scenes the application is running against contained and stable libraries, but talking to a limited and restricted set of system-level services. Those would need to be kept backwards compatible once we start this process.

On the sandboxing side, not only is our third-party application running in a very limited environment, but we can also enhance our system services to recognize requests from such applications via cgroups. This can, for example, allow a window manager to mark all windows spawned by an application even if they are from a bunch of different processes. Also the window manager can now track all processes of a logical application from any of its windows.

For updates the developer can simply create a new image and distribute the same sort of file as before, or, if the purchase is going via some kind of app-store application, the layers that actually changed can be rsynced over individually, creating a much faster update experience. Images with the same base can share data; this would encourage creation of higher-level base images, such as "debian-app-gamegl:wheezy", that all GL game developers could use, thus getting a smaller installation package.

After a while the question of updating abandonware will come up. Say there is this cool game built on top of "debian-app-gamegl:wheezy", but now there is a security bug or some other issue that requires the base image to be updated, although that would not require a recompile or a change to the game itself. If this Docker proposal is realized, then either the end user or a redistributor can easily re-base the old Docker image of the game on a new base. Using this mechanism it would also be possible to handle incompatible changes to system services - ten years down the line AwesomeAudio replaces PulseAudio, so we create a new "debian-app-gamegl:wheezy.14" version that contains a replacement libpulse that actually talks to the AwesomeAudio system service instead.

There is no need to re-invent everything, or to push everything (and now package management too) into systemd, or to push non-distribution application management into distribution tools. Separating things into logical blocks does not hurt their interoperability, but it allows recombining them in a different way for a different purpose, or replacing some part to create a system with radically different functionality. Or am I crazy and we should just go and sacrifice Docker, apt, dpkg, FHS and non-btrfs filesystems on the altar of systemd?

P.S. You might get the impression that I dislike systemd. I love it! As an init system. And I love the ideas and talent of the systemd developers. But I think that systemd should have nothing to do with application distribution or with processes started by users. I sometimes get an uncomfortable feeling that systemd is morphing towards replacing the whole of System V, jumping all the way to System D and rewriting, obsoleting or absorbing everything between the kernel and Gnome.
In my opinion it would be far healthier for the community if all of these projects were developed and usable separately from systemd, so that other solutions could compete on a level playing field. Or, maybe, we could just confess that what systemd is doing is creating a new Linux meta-distribution.

24 August 2014

Vincent Sanders: Without craftsmanship, inspiration is a mere reed shaken in the wind.

While I imagine Johannes Brahms was referring to music, I think the sentiment applies just as well to other endeavours. The trap of believing an idea is worth something without an implementation occurs all too often; however, this is not such an unhappy tale.

Lars' original design idea
Lars Wirzenius, Steve McIntyre and I were chatting a few weeks ago about several of the ongoing Debian discussions. As is often the case, these discussions had devolved into somewhat unproductive noise, and yet amongst all this was a voice of reason in Russ Allbery.

Lars decided to take the opportunity of the upcoming Debconf 14 to say thank you to Russ for his work. It was decided that a plaque would be a nice gift, and I volunteered to do the physical manufacture. Lars came up with the idea of a DEBCON scale, similar to the DEFCON scale, and got some text together with an initial design idea.

CAD drawing of cut paths in clear acrylic
I took the initial design and, as is often the case, what is practically possible forced several changes. The prototype was a steep learning curve in using the Cambridge makespace laser cutter to create all the separate pieces.

The construction is pretty simple and consists of three layers of transparent acrylic plastic. The base layer is a single piece of plastic with the correct outline. The next layer has the DEBCON title, the Debian swirl and the level numbers. The top layer has the text engraved in its back surface, giving the impression the text floats above the layer behind it.

Failed prototype DEBCON plaque
For the prototype I attempted to glue the pieces together. This was a complete disaster and required discarding the entire piece and starting again with new materials.

The final version with stand ready to be presented
For the second version I used four small nylon bolts to hold the sandwich of layers together, which worked very well.

Presentation of plaque photo by Aigars Mahinovs
Yesterday at the Debconf 14 opening, Steve McIntyre presented it to Russ, and I think he was pleased; he was certainly surprised (photo from Aigars Mahinovs).

The design files are available from my design git repo, though why anyone would want to reproduce it I have no idea ;-)

13 June 2014

Aigars Mahinovs: Going to Debconf14

It's that time of the year again, when I plan to go to Debconf, reserve vacation, get a visa waiver and book tickets. Let's hope nothing blocks me from attending this time. It has been too long. Now I just need to finish up photoriver before Debconf :) In fact it is quite close to being ready - I just need to finish the GPS tagging feature, figure out why Flickr stopped working recently and optionally work on burst detection and/or FlashAir IP address autodetection on the network.

3 April 2014

Aigars Mahinovs: Wireless photo workflow

For a while now I've been looking for ways to improve my photo workflow - to simplify and speed up the process. Now I've gotten a new toy to help that along - a Panasonic FlashAir SD card with WiFi connectivity. I was pretty sure that the built-in workflows of some more automated solutions would not be a perfect fit for me, so I got this card, which has a more manual workflow and a reasonable API, so I could write my own. Now I am trying to work out my requirements, the user stories if you will. I see two distinct workflows: live event and travel pictures.

In both cases I want the images to retain the file names, Exif information and timing of the original photos, and also have embedded GPS information from the phone synced to the time the photo was taken. And if I take a burst of very similar photos, I want the uploading process to only select and upload the "best" one (the trivial heuristic being the file size), with an ability for me to later choose another one to replace it. There would need to be some way of syncing phone and camera time, especially considering that phones usually switch to the local time zone when traveling and cameras do not; maybe the original time the photo was taken would need to be changed to the local time zone, so that there are no photos that are taken during the day but have a timestamp of 23:45 GMT.

When I am in Live Event mode I would like the photos that I take to immediately start uploading to an event album that I create (or choose) at the start of the shoot, with a preset privacy mode. This assumes that either I am willing to upload via the 3G of my phone or that I have access to a stable WiFi network on-site. It might be good if I could upload a scaled-down version of the pictures during the event and then later replace the image files with full-size images when the event is over and I am at home on my high-speed network. I probably don't need the full-size files on my phone.

When I am in Travel mode, I want to delay photo uploading until I am back at the hotel with its high-speed WiFi, but also have an option to share some snapshots immediately over 3G or random cafe WiFi. I am likely to take more photos than there is memory in my phone, so I would like to clear original files from the phone while keeping them on the SD card and in the cloud, but still keeping enough metadata to allow re-uploading an image or choosing another image in a burst.

Now I need to flesh out the technical requirements from the above and write an Android app to implement that. Or maybe start by writing this in Python as a cross-platform command-line/desktop app and only later port it to Android when all the rough parts are ironed out. This will have the extra benefit of people being able to run the same workflow on a laptop instead of a phone or tablet. Let's assume that this is written in a pretty flexible way, allowing one to plug in backends for different WiFi SD cards and cloud services, plus plug-in points for things like instant display of the latest photo on the laptop screen in full-screen mode and other custom actions - what else would people love to see in something like this? What other workflow am I completely overlooking?
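
To make the card-to-laptop half a bit more concrete, here is a rough sketch of the "pull the files and pick the best of a burst" step. It assumes the FlashAir is reachable as http://flashair/ and that its command.cgi?op=100 file listing behaves as the vendor documents it (header line and field order may differ by firmware), and it uses the file-size heuristic mentioned above.

# Pull new files off the card, then keep the largest file as the burst "winner".
# http://flashair/ and the command.cgi listing format are assumptions based on
# the card's documented HTTP API; adjust for the actual firmware.
CARD=http://flashair
DIR=/DCIM/100CANON
mkdir -p incoming

curl -s "$CARD/command.cgi?op=100&DIR=$DIR" | tail -n +2 |
while IFS=, read -r dir name size rest; do
    curl -s -o "incoming/$name" "$CARD$DIR/$name"
done

# Trivial "best of burst" heuristic from above: the largest file wins
best=$(ls -S incoming/*.JPG | head -n 1)
echo "would upload first: $best"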

27 March 2014

Aigars Mahinovs: Photo migration from Flickr to Google Plus

I've been with Flickr since 2005 now, posting a lot of my photos there so that other people from the events that I usually take photos of could enjoy them. But lately I've become annoyed with it. It is very slow to upload to and even worse to get photos out of - there is no large shiny button to download a set of photos, like I noticed in G+. So I decided to try and copy my photos over. I am not abandoning or deleting my Flickr account yet, but we'll see.

The process was not as simple as I hoped. There is this FlickrToGpluss website tool. It would have been perfect ... if it worked. In that tool you simply log in to both services, check which albums you want to migrate over and at what photo size, and that's it - the service does the migration directly on their servers. It actually feeds Google the URLs of the Flickr photos, so the photos don't even go through the service itself, only the metadata does. Unfortunately I hit a couple of snags - first of all, the migration stopped progressing a few days and ~20 GB into the process (out of ~40 GB). And for the photos that were migrated, their titles were empty and their file names were set to the Flickr descriptions. Among other things that meant that when you downloaded the album as a zip file with all the photos (which was the feature that I was doing this whole thing for), you got the photos in almost random order - namely in the order of their sorted titles. Ugh. So I canceled that migration (by revoking privileges for that app on G+; there is no other way to see or modify its progress) and sat down to make a manual-ish solution.

First, I had to get my photos out of Flickr. For that I took Offlickr and ran it in set mode:
./Offlickr.py -i 98848866@N00 -p -s -c 32
The "98848866@N00" is my Flickr ID which I got from this nice service, then -p to download photos (and not just metadata), -s to download all sets and -c 32 to do the download in 32 parallel threads. An important thing to do is to take all you photos that are not in a set in Flickr and add them to a new 'nonset" so that those photos are also picked up here, there is an option under Organize to select all non-set photos. It worked great, but there were a couple tiny issues:
  1. There is a bug in Offlickr where it does not honor pages in Flickr sets, so it only downloads the first 500 images in each set; a fix for that is in that bug report;
  2. It also wanted Python 2.6 for some reason, but worked fine with Python 2.7;
  3. With that number of threads, sometimes Flickr actually failed to respond with the photo, serving a 500 error page instead. Offlickr does not check the return code and happily saves that HTML page as the photo. To work around that I simply deleted the HTML errors and then ran the same Offlickr command again so that it re-downloads the missing files. I had to repeat that a few times to get all of them:
ack-grep -l -R "504 Gateway Time-out" dst/ | xargs rm
After all that I had my photos, all 40 GB of them, on my computer. Should I upload them to G+ now? Not yet! See, the photos had all lost their original file names. It turns out Flickr simply throws that little nugget of information away. It is nowhere to be found - not in the metadata, not in the UI, not in the Exif of the photos. Also, some of my photos had clever descriptions that I did not want to lose or re-enter in G+, and also geolocation information. Flickr does not embed that info into the Exif of the image; instead it is provided separately - Offlickr saves it as an XML file next to each image. So I wrote a simple and hacky script to re-embed that info (sketched after the list below). It did 3 things:
  1. Embed the title of the photo into the Description EXIF tag, so that G+ automatically picks it up as the title of the photo;
  2. Embed the geolocation information into the proper EXIF tags, so that G+ picks that up automatically;
  3. Create a new file name based on the original picture-taken datetime and the EXIF Canon FileNumber field (if one exists), so that all photos in an album are sequential.
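
A minimal sketch of those three steps with exiftool follows; the $TITLE, $LAT and $LON values would come from the per-photo XML sidecar files that Offlickr writes (parsing them is elided here), and the hemisphere refs are hardcoded for the common case, so treat it as an illustration rather than the actual script.

# Re-embed the title and geolocation (values parsed out of Offlickr's XML sidecars)
exiftool -overwrite_original \
-ImageDescription="$TITLE" \
-GPSLatitude="$LAT" -GPSLatitudeRef=N \
-GPSLongitude="$LON" -GPSLongitudeRef=E \
photo.jpg

# Rename to "<date taken>_<Canon FileNumber>.jpg" so album order is sequential
exiftool -d "%Y%m%d_%H%M%S" '-FileName<${DateTimeOriginal}_${FileNumber}.jpg' photo.jpg
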
It uses exiftool for the actual heavy lifting. After all that was finished, I tested the result by uploading a few images to G+ and checking that their title is set correctly, that they have a sane file name and that the geo information works. After that I just uploaded them all. I tried figuring out the G+ API (they actually have one), but I was unable to get past the tutorial, so I abandoned it and simply uploaded the photos of each set into their own tab via a browser. That took a few hours. But that is much faster than with Flickr. Like 4 MB/s versus 0.5 MB/s faster. And here is the result. So far I kind of like it. We'll see how it goes after a year or so.

Now on to an even more fun problem - I now have ~40 GB of photos from Flickr/G+ and ~100 GB of photos locally. Those sets partially intersect. I know for a fact that there are photos in the Flickr set that are not in my local set, and it is pretty obvious that there are some the other way round. Now I need to find them. Oh, and I can't use simple hashes, because the Exif has changed and so have the file names for most of them. And not to forget that I often take a burst of 3-4 pictures, so there are bound to be some near-duplicate photos in each set too. This shall be fun :)

4 December 2013

Aigars Mahinovs: Translation management workflows

Whatever you do with translations, consider translation management issues. For example, you are developing a multilingual web site. All kinds of labels, buttons and form fields are nicely translatable with the trans template tag and ugettext. You have po files that follow your code from the dev to the stage to the production environment.
Now you add a CMS into the mix. And suddenly your translations are in more than one place, in more than one format, and follow different routes to production.
Now imagine that you need to add a Chinese version of your entire site. The translator is an off-site contractor. What files would you send to him to translate? How would you generate them? How will you integrate them back?
If someone adds or changes a page on production in English: how will your developers see that change? How will you know that an updated Chinese translation is needed? How will you manage the update of the translation?
If you make a CMS and don't have at least export_po_file and import_po_file management commands, then you are not really multilingual. It is either that or figuring out your own answer to the questions above.
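
For the plain-Django half of the site, that round trip already exists; a sketch of it is below. The export_po_file / import_po_file commands named above are the wished-for CMS-side equivalents, not commands that exist today.

# Regenerate catalogs for the strings marked with the trans template tag / ugettext
django-admin makemessages -l zh_CN
# -> send locale/zh_CN/LC_MESSAGES/django.po to the off-site translator
# <- drop the translated .po back in place, then compile it for production
django-admin compilemessages

# The missing CMS half would look something like (hypothetical commands from this post):
#   python manage.py export_po_file zh_CN > cms_zh_CN.po
#   python manage.py import_po_file zh_CN cms_zh_CN.po
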
I have finally found a Django-based CMS that has those - http://pythonhosted.org/django-page-cms/ . I have not really tried it yet, but I am hopeful.

21 October 2013

Aigars Mahinovs: Moved to Mezzanine

After the server that has hosted my blog for some years gave out its last breath (a second motherboard failure), I decided it was time for a change. And not just a server change, but also a change in the blog engine itself. As I now focus on Python and Django almost exclusively at work, it felt logical to use some kind of Django-based blog or CMS. I tried django-cms and Mezzanine and ... Mezzanine is so fast and simple that I simply stopped looking.

After simply following the tutorial and creating a skeleton project, I had a ready-to-go site with all the CMS features, including a blog. I just had to change a few settings to make the blog module the home page of the site, change the site settings for the title, Google Analytics and such, and tweak the theme a bit to my liking. This was my first real exposure to a Bootstrap design. I must say - it is very simple to understand and modify if your needs fit within its limits. For example, I wanted to remove the left sidebar and expand the main content block to fill that space. All I had to do was remove the div element with class "left-sidebar span-2" and change the class of the main content part from "span-7" to "span-9". To do that I simply copied the templates/base.html file from the Mezzanine default templates and modified it. The information from django-debug-toolbar showed me what files were used in rendering the page.

But the feature that really got me hooked was the Wordpress import. Using a simple management command I was able to feed the XML export file from Wordpress into the Mezzanine instance. It created blog posts, categories, comments, static pages and even redirects from Wordpress permalinks to Mezzanine permalinks. It was not flawless - there were a few issues.

After that was done, it was a relatively straightforward process of picking up the code and data and deploying it to a Django-friendly hosting service. There is plenty of good competition out there; most now offer a simple one-click Django installation, so I just created a simple Django skeleton via their web interface and then replaced what they generated with my app, while keeping their settings as local_settings.py. I should probably write a bit more about that process, after I create a custom fabric file for it.

It is quite a strange feeling to have a Mezzanine blog that responds faster from a shared server half a continent away than a Wordpress on a dedicated server in the same room. There are a few features that I am still missing - most notably draft post autosave. That has bitten me hard while writing this post :P Also a Twitter digest post feature. But on the bright side - that is a great motivation to write such features, preferably in a portable way that other people can use too :)
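
For reference, the whole path from an empty directory to an imported blog is only a handful of commands. This is a sketch based on the Mezzanine tutorial and the importer described above; the exact importer flags (e.g. --mezzanine-user) may differ between Mezzanine versions, so check --help first.

pip install mezzanine
mezzanine-project myblog && cd myblog
python manage.py createdb --noinput     # schema, demo pages, initial site record
python manage.py runserver              # skeleton CMS + blog is already usable

# The Wordpress import: point the management command at the XML export file
python manage.py import_wordpress --mezzanine-user=admin wordpress-export.xml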

12 August 2013

Aigars Mahinovs: Soon to be on my way to Debconf too

Going to Debconf13
I also will soon be on my way to Debconf13, and those who asked for more photos from the place on Planet Debian will soon start getting their fill ;) Privacy note: as always, I will be trying to balance the need for people to get lots of pictures as fast as possible with the privacy of people who do not want to be in published photos. The way this has been done before and will be done this time (at least from my side) is that:

Update: http://www.aigarius.com/blog/2013/08/12/not-going-after-all/

Aigars Mahinovs: Not going after all

I was very excited to go to Debconf13, but at the last minute I caught some kind of virus and went down with a fever the night before the flight. I am sure Wouter will take over group photo duties. And I will just take this week of vacation to rest up and check out the video streaming :)

16 April 2013

Aigars Mahinovs: China 6 - the Internet

China is famous not only for its stone wall in the north of the country, but also for the Great Firewall of China around the whole country's Internet, which blocks everything in turn and slows down everything else. My first personal encounter with this service already happened at the Shanghai airport, where it quickly turned out that not only Facebook but also Twitter is blocked in the country, which considerably hampered my ability to quickly and easily let everyone know that I am still alive and well. After a couple of experiments it turned out that, although the Google+ home page is not reachable from the phone and the Google+ (and WhatsApp) Android apps cannot be downloaded, both of these services keep working from the phone if they are already installed on it. So I started writing my travel notes on Google+, and a few days into the trip I even managed to configure the If This Then That service to take my Google+ posts and turn them into Twitter posts (which then propagate further over other channels to Facebook and Draugiem, and also show up as a weekly summary on this blog).

Google+ has its pluses, but also its minuses. The main minus I noticed on this trip is that in the Google+ Android application it is not possible to prepare several draft posts (ideally each with its own geolocation) while offline - you can only write one post, and that post's GPS coordinates will be those of the place where the Internet shows up again. I have already written to Google about this problem. The main plus of Google Plus (no pun intended) is Instant Upload - if you take photos with an Android phone, those photos are automatically uploaded and appear in the new-post interface, where they can be added to a post with one click and without any waiting. Unfortunately that does not work with proper cameras. For now ;)

But I would not be a real computer geek if I did not try to crack or work around this little China problem, right? ;) The simplest way to get around China's Great Firewall is to use any VPN that allows not only reaching the VPN network's resources but also routing all traffic through the VPN connection. Such VPN connections can be bought, or (if you have a Linux server or router outside of China) set up yourself. In my case it was OpenVPN, enabled with one click on the Fonera router sitting at my home. Unfortunately, China is peculiar. The list of blocked pages, ports and protocols changes between districts, depends on whether the Internet is mobile or WiFi or a wired connection, and also simply varies from day to day. In a large share of cases, VPN connections also end up on the blocked list. Quite often private ones too. I somehow doubt that my home IP address is on the lists of the Chinese firewall, yet sometimes I could not connect even to that VPN. And in those situations, if you want to watch some YouTube video, only one, brilliant solution remains - sshuttle!

This brilliant tool creates something similar to a VPN connection over the ordinary SSH port and protocol. On the local machine you need Python and root rights, but on the server you only need the rights to run Python programs. sshuttle sends itself over to the server and starts itself there, encrypts and forwards all connections and also DNS requests if you ask it to. You can forward specific networks or all traffic. And in my experience its speed was even better than the usual VPN. Overall, the Internet blockade and its general slowness is one very strong minus for China. Jumping a bit ahead in the story, I will say that Hong Kong does not have these problems - the Internet there is excellent! So there is your teaser :)
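
For reference, the sshuttle setup described above boils down to a single command on the laptop; the server name is a placeholder, and the remote end only needs SSH access and Python.

# Tunnel all traffic (0/0) and DNS lookups through a plain SSH login outside the wall
sshuttle --dns -r user@myserver.example.com 0/0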

18 February 2013

Aigars Mahinovs: Istabai.lv installation

So, the price of heating my apartment has gone up significantly since last year, and a lot of people have noticed the same trend. As a geek, I want not just any solution, but the geekiest solution possible - enter a Smart Home system. The heating in my apartment is separated from all other apartments with a separate heat meter that measures both the heating water flow and the temperature difference between the incoming and outgoing pipes, so if I reduce my heating consumption I will immediately see that in lower heating bills. This particular smart home system is very simple, made in Latvia and also relatively cheap, so I decided to give it a try.

The start was very simple - I filled out a form on their home page and got a call the next day to verify the specifics. They already knew my building and had experience installing there, so they offered me two options: I could have two thermal sensors and a radiator controller on each of the 3 radiators in my apartment, or (seeing how my apartment is relatively small and well ventilated, which reduces the usefulness of per-room control) one thermal sensor and one radiator controller installed on a new valve that they would build into the heating pipe just outside my apartment. I chose the second option, as it was cheaper, simpler and involved less installation for me. Also they offered a nice discount. A plumber of my building installed the valve at some point during the day; I did not even have to be there or arrange anything special, it was all included in the main bill. Then the next day a friendly guy came by my office and gave me three very cute boxes. The packaging was very, very Apple-like. The three devices were named "mother", "spy" and "puller" - basically a base station, a temperature sensor and a radiator regulator (a wireless stepper motor).

The installation was straightforward - I connected the base station (something very similar to this thing in size and looks) to Ethernet and power, went to the web address specified on the sticker on the base station, completed a registration process there, entered a code from the base of the base station into the website, and it just found it. After that I pressed a button on the temperature sensor (a Zippo-lighter-sized box with a built-in Li-Ion battery, rechargeable via microUSB once a year or so) and put it on a shelf. I secured it with some double-sided tape after the cat got to it once and played some hockey with it while I was away. The temperature sensor immediately showed up in the Web UI and I could assign it to a room.

The regulator installation was the most complex bit, but it was trivial as well - all I had to do was insert some AA batteries (helpfully included in the box; they should last a couple of years by their estimates), then go outside to the newly installed inlet valve, put the thing on, tighten the thread by hand and then push and hold a button for 10 seconds. After that the controller came to life, connected to the base station and started spinning the valve, testing its operational range I presume. The range is very good - you can see 13% signal in the screenshots, and that is across the whole apartment, through 3 solid walls and another 5 meters out in the hallway, inside another wall. As soon as that was done, the Web UI showed the controller and asked me to assign it to a room. So now I have 1 room defined with 1 temperature sensor and one controller. That is all that is needed.

Temperature
From the charts I have already found out that at the current inside/outside temperatures my apartment cools down at around 0.3 degrees C per hour, and with the current radiator settings it heats up by around 1 degree C per hour. I should turn the radiators up a bit, thus increasing the rate at which the system is able to heat things up. Also, I should try to re-inspect all possible cold spots to see why the cooldown speed is so high - I expected it to be lower. At least now I have real data to compare things against.

Temperature chart
At this point you can either fire up your web browser or an Android/iOS client on your mobile device to monitor and control the temperature of the apartment. But that was not enough for me - I opted for the PRO upgrade, which includes the ability to program a weekly cycle of desired temperatures. The programming is limited and thus simple to do - there are three temperature modes: day, night and eco. The idea is that you have the night mode every night, then at some time during the day you switch to the day mode (to wake up), then to eco mode (when no one is home), and then in the evening you go back to day mode and later to night mode. There are separate sliders for each day of the week and temperature settings for each of the modes for every programmed room.

Time zones
Temperatures in zones
Devices
Total investment so far: 210 Ls (or around 3 months' worth of just heating bills). The experience of other users tells of a 30-50% heating bill reduction, so this should pay for itself in 2-3 years. In addition I get a more stable and predictable temperature at home and some more geeky stuff to brag about - yay! These guys have their work cut out for them: I have no idea how/if they do international orders, I would definitely like to see more information on the web site about how my system is working (more temperature history, valve state history, wireless signal and battery history), and the mobile apps are a bit wonky - it feels like the mobile app does not respect the temperature set over the web and tries to reassert itself when opened, it often shows out-of-date temperature data with no indication that it is out of date, and the mobile apps are missing charts. All in all this is a great start: easy, quite cheap and stylish home heating automation. Good work!

P.S. I am pretty sure they use Linux, but it is so packaged that there is basically no way to confirm or deny that.

P.P.S. Also check out this overnight chart - there is definitely some mode-switch anticipation going on :)
Overnight temperatures

6 February 2013

Aigars Mahinovs: Pension forecast 18.43

Update: I apologise for spamming all the nice people on Planet Debian with this unrelated post in a foreign language - it got mis-tagged, and (although I tried classifying it correctly and removing it from the separate Debian-only RSS feed going to Planet Debian) the Planet software just keeps it around until the admins get a spare moment to remove it manually. The basic idea of the post is that there is an e-government service by the Latvian government that lets any citizen check what their pension would be, but it is kind of useless, because it calculates what your pension would be if you decided to retire right now. So in the post below I provide a few simple steps on how to estimate what the real pension could be (in today's currency values) if a person kept paying into the system at the same rate until the normal retirement age.

Latvija.lv has a tool for forecasting your future pension, but it is, to put it mildly, useless for anyone who is not planning to retire within the next year or so. After a quick look at the relevant laws and regulations I arrived at a more useful method. First of all, we can safely ignore pension indexation and all the related recalculations of pension capital or increases in contributions - it is clear that, as the years go by, there will be some rate of inflation that pushes up salaries, increases contributions to the pension capital and triggers recalculations of all kinds of payouts, but that same inflation also directly affects how much each lat of the final pension is worth at the moment the pension is received. The inflation effect therefore cancels out if we judge the final figure in today's prices, i.e. if the result is a pension of 400 Ls, then whether that is a lot or a little should be judged at today's prices, rather than by trying to guess what prices will be in 30 years. With all that taken into account, the pension calculation simplifies considerably:
  1. Take the report from that Latvija.lv service. It gives the pension capital accumulated so far (U) and an approximate figure for the yearly increase of that capital (P).
  2. Calculate how many years are left until retirement age (t), i.e. until 62. I would personally count on the target age being 65 by then.
  3. Work out the approximate pension capital at retirement age, assuming the contributions continue as they do now: K = U + (P * t).
  4. Calculate the resulting monthly pension from that pension capital: Pension = K / G / 12, where G is taken from the row in the Cabinet of Ministers regulations that corresponds to the retirement age; for 62 it is 18.43.
Note: G is a statisticians' forecast of how many years, on average, Latvian residents live after reaching the corresponding retirement age, i.e. it is a forecast of how many years, on average, you will have the pleasure of enjoying your pension. If the result seems too small, keep in mind that the sum does not include the dividends from the 2nd and 3rd pension pillars, which over a couple of decades can be quite substantial. Some forecasts say that the 2nd and 3rd pillars together could add roughly as much pension again as the 1st pillar. But that depends heavily on the long-term dynamics of the stock market. Calculating all of this for myself, I get a pension forecast of around 400 Ls. If the predictions about the 2nd and 3rd pillars come true and I imagine myself right now as a pensioner with a 600-800 Ls pension - you know, that would not be bad at all. :)
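
To make the arithmetic concrete with purely made-up numbers: with U = 20,000 Ls accumulated so far, a yearly increase of P = 2,500 Ls and t = 25 years left, step 3 gives K = 20,000 + 2,500 * 25 = 82,500 Ls, and step 4 gives a monthly pension of 82,500 / 18.43 / 12 ≈ 373 Ls - the same ballpark as the ~400 Ls estimate above.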

23 September 2012

Aigars Mahinovs: Cloning or pre-configuring a batch of Android phones

An interesting question popped up in my Twitter stream today - is there an Android alternative to the Apple Configurator (for iPhone, iPad and iPod Touch), which allows creating a bunch of identical Apple devices with some added configuration and applications? The best I could come up with is not as polished, but on the other hand a much more powerful option - Nandroid backup and restore (also known as ClockworkMod Recovery backup). To do this, you need a source phone and a bunch of destination phones. On the source phone a ClockworkMod Recovery (CWR) must be installed (either via root or via a bootloader unlock). On the destination phones you will either need to also install ClockworkMod Recovery or unlock the bootloader to allow the Fastboot tool to work. The procedure then goes as follows:
1. Unlock the bootloader on the source phone (search for device specific info on how to do that)
2. Use fastboot flash recovery filename.img to write a device-specific version of CWR to the recovery partition of the device (the phone must be in fastboot mode at that point)
3. Do whatever customisations you want to the source phone at this point. You can use custom ROMs, install whatever applications and do whatever configuration you like, but I would suggest keeping everything in phone memory so you don't have to flash the SD card as well.
4. Reboot the source phone into CWR mode, use the backup option to create a full backup.
5. The backup will now be on the SD card of the source device. Copy it to the computer that you will use for creating the copies. Then, for each destination phone:
6. Unlock the bootloader of the destination phone
7. Reboot the phone in the fastboot mode, connect it via USB to the copying computer
8. Flash all partitions from the backup using fastboot flash ... commands (sketched below); skip flashing the recovery partition if you don't want CWR on the destination device
9. Re-lock the bootloader (with fastboot oem lock).

And you are done! All the phones must be of the same model, and that model must support unlocking the bootloader for this to work. I prefer this method because it makes it possible to create an end device without root, with a locked bootloader and without CWR, thus providing some security - unlocking the bootloader wipes the device, so without specific hacking an attacker cannot easily get access to the system data on such a device. It is also possible to do this with devices that do not have a bootloader unlock, if there is a way to root the original firmware, which allows installing CWR and going on from there, but that is significantly more complicated and time-consuming, so using devices with the ability to unlock the bootloader is much preferable.

However, before you go to all that trouble, it might be worth considering whether for your particular use case the two commands from the Android SDK - adb install ... and adb push ... to install applications or individual files on devices, or adb backup AppName and adb restore ... to back up and restore one or more individual applications with all their settings - would be enough. These options have the benefit that they work across different device models and that they do not wipe other data or applications from the devices.

As I could not immediately find a better way or even a detailed guide on how to do this, I decided to write this post, so it would be easier for other people to find this information. If you know a better way, please do mention it in the comments section!

20 September 2012

Aigars Mahinovs: You thought NVidia was bad? Don't try AMD

Remember when Linus Torvalds lambasted NVidia for not supporting their Optimus technology in their Linux drivers for half a decade and counting? Well, I went out and bought an AMD/ATI video card as my upgrade. And you know what? Its Linux drivers are far, far worse than NVidia's.
1. Most of the games I had working fine on NVidia do not work on AMD. And those that do suffer far more visual corruption, synchronization bugs (like the bottom 40% of the screen rendering half a second after the top 60%), strange visual artifacts (weird triangles popping out of everywhere) and crashes, lots of crashes.
2. There were crashes with NVidia too, but NVidia never managed to crash Compiz along with it, or crash the whole X server, or lock up the system so far that only SysRq works, or even lock up the system so far that only powering it off manually works.
3. And then there is the configuration atrocity. Apparently AMD is too good to store its configuration in /etc/X11/xorg.conf. Or even to document the supported options there. Instead they have their own (also undocumented) configuration file in the /etc/ati folder. And it is undocumented because it is a cryptic mess, and the only supported way to change it is to use their tools - aticonfig and amdcccle. The command line tool is almost reasonable, except it is also barely documented. For example, one of my screens somehow always started at 1920x1080@30Hz. There were 3 different ways to specify the default resolution, but none of them used or saved the refresh rate. And when I changed it in the GUI tool, the refresh rate did change, but it was never saved. Oh, and there is no save button anywhere. It "just works", except when it doesn't. Like: both of my screens for some reason started with huge black borders around the picture; I finally narrowed it down to the GUI "overscan" setting, which defaulted to 10%. Ok, so I change it, it works, but the next time I reboot, the overscan is back! I had to find an undocumented invocation of aticonfig that would change the default value to 0%. Why did this one setting not save? Oh, and a fun note - the refresh rate of that second screen was correct on the login screen, but it then switched back as I logged in. Fun, huh?
4. Even at basic desktop tasks fglrx is inferior not only to the free driver, but also to the nvidia driver - even simple scrolling of a large folder in Nautilus seems to tax the $200 card to its limits: the bottom row blinks into place almost half a second after I stop scrolling. Another example - with NVidia, when I switch my TV to the HDMI input from the card, the sound starts at the same moment as the picture; with AMD, however, the sound only decides to show up 10-15 seconds later. And sometimes it does not show up at all, unless I start the AMD Control GUI tool, and only then does the sound show up, 15 seconds later (without my doing anything in the GUI).

It may be that one part of AMD is better than NVidia at talking to free driver developers, but another part is so much worse at the actual technical work of writing a driver that it is not even funny. They are busy reinventing the bicycle of configuration and display management, while their core driver is just not good enough.

TL/DR: Anyone wanna buy a HD 6850 cheaply off my hands?

Crossposted to Google+ https://plus.google.com/u/0/107099528362923100900/posts/6PC7RFQpm8K

P.S. I also noticed the color difference - with NVidia there was no difference between colors on my LCD TV and my IPS monitor, but with AMD there is a huge difference, the TV colors just got washed out. I guess there is no proper color calibration support in the AMD driver?

Update: I have managed to return the HD 6850 to the shop where I got it (thanks to a nice law requiring web shops to take stuff back within 14 days, no questions asked) and got a new NVidia GeForce GTX 660 instead. I had to build an updated NVidia driver (304.48, from this post, just like here or here), but other than that it was smooth and painless and everything is working great again. Only even faster :)

8 August 2012

Aigars Mahinovs: Fun with the new camera

Triple hit! So, I finally received my Canon 650D and Canon EF 40mm STM lens. I am very happy with the purchase. The STM lens is very, very compact - it is barely bigger than the lens cap that comes with the camera. The focusing is fast enough, and it is smooth and silent enough not to be audible in video mode. The largest bonus of the 650D compared to my old 550D is the articulated display. It has come in handy many times already. It is especially useful with the STM lens, live view and the touch-to-focus function - very useful for focusing on something even when the camera is at a weird angle. And a couple of days ago I got to really use the new camera, capturing the shot above (and others in that set). It turns out capturing cool lightning shots is quite easy: it is basically the same as shooting photos of a fireworks display, with the exception that lightning is sometimes not as bright and that there is a high chance of rain in a thunderstorm ;) P.S. Got over 5k photos on Flickr now with this set, and closing in on 500k views. That feels cool :)

21 June 2012

Aigars Mahinovs: Fish-ing for tips

I am trying to get used to fish as a default shell. I like some things in it, but cannot quite get used to other things, so I wonder - maybe I am cooking it wrong? So here are things that I could not find a solution for while switching from bash. I do like the colors, the shortened cwd in the prompt, the universal history and universal variables, lower memory usage and funkier completions. Any other fish features people like? I am especially interested in features that are not enabled by default.

11 June 2012

Aigars Mahinovs: New camera research

So, I was reading the coverage of the newly announced Canon 650D last week, and it so happened that a friend needed a camera, so I sold my old Canon 550D and started looking for a replacement. For the last few years 95+% of the photos I have taken were with a Canon 35mm f/2.0 lens on my Canon 550D, sometimes helped by a bounce&swivel flash. That is very close to 50mm in full-frame equivalent. So I am looking to continue shooting in a similar way, with a near-50mm prime lens and sometimes with bounce flash. As a given I have: Canon 35mm f/2.0, Canon 50mm f/1.8, Canon Speedlite 430EX II (bounce&swivel flash) and a budget of around 1k USD. It is a fun time to be looking around for a new camera body, so I found that I have many interesting options:

Option 1: safe bet. Wait a bit and get the new Canon 650D body and continue shooting with the 35mm and the Canon flash. Pros: safe, new tech, good video. Cons: rather bulky and heavy, boring.

Option 2: old school. Get a used Canon 5D Mark II body that the pros are selling off to buy their Mark III, and start taking photos with the Canon 50mm f/1.8 lens and the Canon flash. Sell the Canon 35mm f/2.0 to cover the price difference between the bodies. Pros: full frame, higher resolution, huge viewfinder. Cons: very heavy and bulky, an evolutionary dead end, used, bad corner performance even with the best lenses.

Option 3: new wave. Sell the rest of my Canon stuff and buy into one of the new mirrorless systems. The problem with this approach is that I cannot find a mirrorless camera with all of: 18+ Mpix, a sharp and fast near-50mm-equivalent autofocus prime lens, and a bounce&swivel external flash. Pros: light and small, new tech, funky. Cons: lens fragmentation, bad adapters (no AF), bad flashes.

So, basically I am torn three ways. Any advice? Which mirrorless camera system would best match my needs?

P.S. Due to work issues and the long distance, I am *not* attending this year's Debconf.
P.P.S. To keep this slightly more on-topic - how is the RAW file support on Linux for the new crop of mirrorless cameras?

8 March 2012

Raphaël Hertzog: People Behind Debian: Gregor Herrmann, member of the Perl team

Photo by Aigars Mahinovs

I followed Gregor's evolution within Debian because I used to be somewhat active in the Perl team. His case is exemplary because it shows that you don't need to be an IT professional to join Debian and to make a difference. His QA page is impressive, with hundreds of packages maintained and hundreds of non-maintainer uploads too. While he started out slowly, I remember meeting him at Debconf 7 in Edinburgh, and after that he really got more involved. Again a case of someone joining for technical reasons but getting more involved and staying for social reasons! :-) Let's jump into the interview and learn more about him.

Raphael: Who are you?

Gregor: I'm 41 years old, and I live in Innsbruck, Austria, in a shared apartment with a friend of mine. In my day job, I'm working at the regional addiction prevention agency, so I'm one of the few Debian guys who's not an IT student or professional. I started maintaining packages in 2006, and I have been a DD since April 2008.

Raphael: How did you start contributing to Debian?

Gregor: After having used Debian on servers for some years, I finally switched to it on the desktop after some procrastinating. Soon afterwards I wanted to know more about the "making-of", started to join mailing lists, filed bugs, and tried to learn packaging. Luckily I quickly found a permanent sponsor - Tony Mancill - and we're still co-maintaining each other's packages. And when I packaged my first Perl modules, Gunnar Wolf invited me to join the Debian Perl Group, an offer I accepted a few days later. And I'm still there :) Later, the NM process, although it involved some waiting times, was also a good learning experience, thanks to my AM Wouter Verhelst. (And in the meantime the organization of the NM process has vastly improved, from what I hear.) So my starting point for joining Debian was my curiosity, but what really helped me find my way into the project was the support of the people who invited and helped me.

Raphael: What's your biggest achievement within Debian or Ubuntu?

Gregor: I'm not sure I can name a single big achievement, but I guess I can say that my contributions to the Debian Perl Group have helped to make and keep the team a success story.

Raphael: The pkg-perl team seems to work very well. As an active member, can you explain to us how it is organized? How do you explain its success? In particular it seems to be a great entry point for new contributors.

Gregor: The team is huge, both in number of members and of packages (over 2200). Since the last DebConf we manage our source packages in git, we have 2 mailing lists and an IRC channel, and we manage to keep an overview by using PET, the Package Entropy Tracker. It's true that we get new members on a regular basis; we try to invite people (like it happened to me 6 years ago :) ), but there are also quite a few new contributors who find our docs and introduce themselves on the mailing list. Maybe someone should conduct a study and ask them what motivated them to join. :) We hand out group membership/commit access quickly, and we try to mentor new contributors actively during their early times in the group. Some of them leave for other projects after some time, but many also stay and become DDs later. I'm not sure what the reasons for the group's success are, maybe a combination of: For everyone interested in joining the Debian Perl Group, our Welcome page on the wiki is a good starting point.

Raphael: What are your plans for Debian Wheezy?

Gregor: Nothing overly exciting.
What I should do is get a newer JabRef into Debian (which involves packaging some new Java libraries - any takers?). A solution for libdatetime-timezone-perl (which ships timezone data converted to Perl modules and tends to get outdated when the timezone data change) would be nice; let's see if #660404 leads to some results. And some Perl packages will also need a bit of work for the hardening build flags release goal (cf. #657853).

Raphael: What's the biggest problem of Debian?

Gregor: Inertia. While I really like the fact that Debian is a volunteer project, and that every contributor works when and on what they decide to work on, I get the feeling that Debian could do better at moving forward, innovating and taking decisions. I also think that more uniformity in managing source packages would make things easier; it's quite amazing to see how many source formats, packaging helpers, patch systems, RCSs etc. are used all over the archive. I'm not advocating for mono-cultures, and I consider this diversity a strength in general, but having to first find out how this particular package works when preparing a bug fix can be annoying. On the bright side, I think that the myth that Debian and its mailing lists are mostly about flames can be considered dispelled by now. Sure, sometimes the tone could be a bit more civil, but in general most of the interactions I've seen in the last years were friendly and helpful. IMO, the Debian Project consists of mostly nice and cooperative people, and that's what makes it fun for me.

Raphael: You're one of the most dedicated participants in RCBW (Release Critical Bugs of the Week), an initiative to fix RC bugs every week. How much time do you spend on it? What would you advise people who are considering joining the movement?

Gregor: I got into the habit of fixing RC bugs after having been invited to my first Bug Squashing Party in Munich some years ago. During that weekend I saw that fixing RC bugs can be fun, is often not that difficult, and gives a warm fuzzy feeling :) I can definitely recommend attending a BSP if one happens to be organized near you. After tasting blood at this first BSP I tried to continue looking at RC bugs, and I guess I spend something around half an hour per day on it. I usually blog about it once a week, in order to motivate others to join in. And joining is easy: just take a look at the tips people like Zack, Vorlon, or me have written. You don't have to be a DD to help; many of my NMUs are based on patches that others kindly prepare and send to the BTS - kudos! Another nice aspect is that the RC bug list contains problems from different fields: general packaging problems, language-specific issues, policy violations, etc. So there's something for everybody, and you don't have to be an expert in all fields to fix a specific bug. What's rewarding about fixing RC bugs is not only the feeling of accomplishment and the knowledge of having helped the next release - I also received quite a few "thank you" mails from maintainers who were busy at the time and appreciated the help.

Raphael: Do you have wishes for Debian Wheezy?

Gregor: Well, there's not so much left of the Wheezy release cycle if we manage to freeze in June :) Some quick thoughts for Wheezy and Wheezy+1:

Raphael: Is there someone in Debian that you admire for their contributions?

Gregor: There are many people in Debian I admire, too many to name them all.
The first one that comes to my mind is Russ Allbery, who not only does great work from lintian to the Debian policy, but also sets a great example of communicating in a perfectly polite and respectful way, even in heated discussions.
Thank you to Gregor for the time spent answering my questions. I hope you enjoyed reading his answers as much as I did. Note that older interviews are indexed on wiki.debian.org/PeopleBehindDebian.

Subscribe to my newsletter to get my monthly summary of the Debian/Ubuntu news and to not miss further interviews. You can also follow along on Identi.ca, Google+, Twitter and Facebook.
